This audio is presented by the University of Erlangen-Nürnberg.
I would like to wrap up this topic today and then start with regression.
First of all, a small slide correction from what I wrote yesterday.
We introduced the letter rho, which is the degree of randomness.
Yesterday I said that rho denotes the features that are selected, but this is unnecessarily narrow, and it would collide with today's use of rho.
Let's formulate this a little more generally.
Rho denotes a randomized training instance with which we train a node: we select a phi, a psi, and a tau.
That means one particular choice of a feature subset phi (yesterday we defined it just for phi), a weak learner psi, and parameters tau.
In our examples today, phi and psi are fixed: we always look at this two-dimensional feature space and we fix the weak-learner function psi.
The magnitude of rho then just indicates how many different parameters tau we try out during node training.
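To make this concrete, here is a minimal sketch of randomized node training; this is my own illustration, not taken from the lecture or the textbook. It assumes a fixed feature space, axis-aligned threshold tests as the weak learner psi, and rho candidate thresholds tau drawn at random; the function names and the information-gain criterion are my choices for illustration.

```python
import numpy as np

def class_entropy(labels, n_classes):
    """Shannon entropy of the empirical class histogram at a node."""
    counts = np.bincount(labels, minlength=n_classes)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def train_node(X, y, n_classes, rho, rng):
    """Try rho randomly drawn axis-aligned splits (feature index, threshold tau)
    and keep the one with the largest information gain."""
    parent = class_entropy(y, n_classes)
    best = None
    for _ in range(rho):
        feat = rng.integers(X.shape[1])                        # which feature to threshold
        tau = rng.uniform(X[:, feat].min(), X[:, feat].max())  # candidate threshold tau
        left, right = y[X[:, feat] < tau], y[X[:, feat] >= tau]
        if len(left) == 0 or len(right) == 0:
            continue                                           # degenerate split, skip
        gain = parent - (len(left) * class_entropy(left, n_classes) +
                         len(right) * class_entropy(right, n_classes)) / len(y)
        if best is None or gain > best[0]:
            best = (gain, feat, tau)
    return best  # (information gain, feature index, threshold tau)
```

In this reading, rho = 1 means the node simply takes the first random split it sees (maximum randomness), while a large rho compares many candidates before committing, which comes closer to a classical exhaustive split search.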
I'm just standing in front of the blackboard.
Everything else should be fine.
What we are going to do now is to look at a couple of illustrative examples from the textbook by Criminisi and Shotton.
They demonstrate the learning behavior of the trees a little bit more when varying the parameters.
I think we already looked at the two most important cases yesterday: the influence of the forest size and of the tree depth.
This is tiny, particularly if you're sitting very far in the back.
Nevertheless, we just need to perceive the colors and then I think we're good.
This is just restating our example from yesterday on the influence of the forest size.
If we have one tree, we decide to split at some point and say, left is this class yellow, right is the class red.
If we increase the forest size, that is, if we take multiple trees, then more of these splitting lines are introduced,
and our class prediction is always an average of the single-tree predictions.
What we obtain is a blending behavior between yellow and red with increasing forest size.
That's what we wrote down yesterday.
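As a small sketch of that averaging step (my own illustration; `predict_proba` is an assumed per-tree interface, not something defined in the lecture), the forest posterior is just the mean of the per-tree class posteriors, and the smooth blending between yellow and red comes from exactly this mean:

```python
import numpy as np

def forest_posterior(trees, x):
    """Average the class posteriors of all trees for a sample x.
    A single tree yields hard, piecewise-constant decisions; averaging
    many trees blends them into the smooth transitions seen in the figures."""
    posteriors = np.stack([tree.predict_proba(x) for tree in trees])  # (n_trees, n_classes)
    return posteriors.mean(axis=0)

def forest_predict(trees, x):
    """Predicted label: the class with the highest averaged posterior."""
    return int(np.argmax(forest_posterior(trees, x)))
```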
Here's an example just demonstrating how we can do multi-class.
We also talked about that yesterday and how things behave once we start to add a little bit of noise.
This is a two-class case with a spiral shape to make the problem a little bit more interesting.
Here we have a four-class case indicated by the different colors of these dots.
Then the prediction is indicated here by these colors.
In the four-class case, colored in four colors, we see that the prediction really follows the shape of this spiral.
What we see in the bottom-most row are the entropy images.
The entropy more or less expresses how confident we are in our class assignments: the lower the entropy, the higher the confidence.
Black means we are relatively sure that all these samples belong to one class.
The histogram there is sort of boring, because we have just examples of one class.
A brighter color means we observe samples from multiple classes,
so the histogram at that leaf node doesn't clearly indicate which class it belongs to.
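As a rough sketch of how such an entropy image could be produced (again my own illustration, not taken from the slides): for every pixel we take the class histogram of the leaf it falls into, compute its normalized entropy, and use that value as the gray level, so a pure one-class histogram comes out black and a mixed histogram comes out bright.

```python
import numpy as np

def normalized_entropy(histogram):
    """Entropy of a leaf class histogram, scaled to [0, 1]:
    0 (black)  -> all samples belong to one class (confident),
    1 (bright) -> the classes are uniformly mixed (uncertain)."""
    p = np.asarray(histogram, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)) / np.log2(len(histogram)))

print(normalized_entropy([12, 0, 0, 0]))  # 0.0: sure about the class
print(normalized_entropy([3, 3, 3, 3]))   # 1.0: completely uncertain
```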
In this four-class case here, and this is a noise-free case,
we see that we get a relatively sharp delineation of the class boundaries.
In the right-most column here is an example where a little bit of noise is added to the data.
Essentially, these are still spirals, but even when you look at it, it's not so obvious.
Apparently, to the tree, it's also not so obvious.